domain-invariant representation
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting, by learning domain-invariant representations. However, recent work has shown limitations of this approach when label distributions differ between the source and target domains. In this paper, we propose a new assumption, generalized label shift (GLS), to improve robustness against mismatched label distributions.
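One ingredient of adapting under label shift is estimating the class-ratio importance weights w[y] = p_T(y) / p_S(y) without target labels. A minimal numpy sketch in the spirit of black-box shift estimation — solving C w = mu, where C is the source confusion matrix and mu is the target prediction distribution. Function names are hypothetical, and the paper's actual estimator may differ:

```python
import numpy as np

def estimate_label_ratios(source_preds, source_labels, target_preds, n_classes):
    """Estimate w[y] = p_T(y) / p_S(y) from a fixed classifier's predictions.

    Solves C w = mu, where C[i, j] = p_S(pred = i, label = j) is the source
    joint confusion matrix and mu[i] = p_T(pred = i) is the prediction
    distribution on unlabeled target data.
    """
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(source_preds, source_labels):
        C[p, y] += 1.0
    C /= len(source_labels)
    mu = np.bincount(target_preds, minlength=n_classes) / len(target_preds)
    w = np.linalg.solve(C, mu)          # assumes C is invertible
    return np.clip(w, 0.0, None)       # ratios cannot be negative
```

For example, with a perfect classifier, balanced source labels, and a target that is 80% class 0, the recovered weights are [1.6, 0.4] — exactly p_T(y) / p_S(y). Such weights can then reweight the source domain inside the adversarial objective.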
Performative Drift Resistant Classification Using Generative Domain Adversarial Networks
Makowski, Maciej, Gower-Winter, Brandon, Krempl, Georg
Performative Drift is a special type of Concept Drift that occurs when a model's predictions influence the future instances the model will encounter. In these settings, retraining is not always feasible. In this work, we instead focus on drift understanding as a method for creating drift-resistant classifiers. To achieve this, we introduce the Generative Domain Adversarial Network (GDAN), which combines both Domain and Generative Adversarial Networks. Using GDAN, domain-invariant representations of incoming data are created and a generative network is used to reverse the effects of performative drift. Using semi-real and synthetic data generators, we empirically evaluate GDAN's ability to provide drift-resistant classification. Initial results are promising, with GDAN limiting performance degradation over several timesteps. Additionally, GDAN's generative network can be used in tandem with other models to limit their performance degradation in the presence of performative drift. Lastly, we highlight the relationship between model retraining and the unpredictability of performative drift, providing deeper insights into the challenges faced when using traditional Concept Drift mitigation strategies in the performative setting.
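The domain-adversarial half of such an architecture typically hinges on a gradient reversal layer: the identity function on the forward pass, but with gradients negated on the backward pass, so the feature extractor is trained to *confuse* the domain classifier and thus produce domain-invariant representations. A minimal framework-free sketch of that mechanism (an illustration of the general DANN-style component, not the authors' GDAN implementation):

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer used in domain-adversarial training.

    Forward pass: identity. Backward pass: gradients are scaled by -lam,
    so minimizing the domain classifier's loss downstream pushes the
    upstream feature extractor to *maximize* it, encouraging features
    the domain classifier cannot separate.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # flip the sign of incoming gradients
```

In a full pipeline this layer sits between the shared feature extractor and the domain classifier; the generative network described in the abstract would operate on the same representation space to undo the drift-induced shift.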